K-means clustering based on adaptive cuckoo optimization feature selection
Lin SUN, Menghan LIU
Journal of Computer Applications    2024, 44 (3): 831-841.   DOI: 10.11772/j.issn.1001-9081.2023030351

The initial number of clusters in the K-means clustering algorithm is determined randomly, and original datasets often contain a large number of redundant features, both of which reduce clustering accuracy; moreover, the Cuckoo Search (CS) algorithm suffers from slow convergence and weak local search. To address these issues, a K-means clustering algorithm combined with Dynamic CS Feature Selection (DCFSK) was proposed. Firstly, an adaptive step-size factor was designed for the Levy flight phase to improve the search speed and accuracy of the CS algorithm. Then, to balance global and local search and accelerate the convergence of the CS algorithm, the discovery probability was adjusted dynamically. Thus an Improved Dynamic CS algorithm (IDCS) was constructed, and on its basis a Dynamic CS-based Feature Selection algorithm (DCFS) was built. Secondly, to improve the accuracy of the traditional Euclidean distance, a weighted Euclidean distance was designed to account for the contributions of both samples and features to the distance calculation. To determine the optimal number of clusters, weighted intra-cluster and inter-cluster distances were constructed based on the improved weighted Euclidean distance. Finally, to overcome the defect that the objective function of traditional K-means clustering considers only intra-cluster distance and ignores inter-cluster distance, an objective function based on the median silhouette coefficient was proposed. Thus, a K-means clustering algorithm based on adaptive cuckoo optimization feature selection was designed. Experimental results show that IDCS achieves the best metrics on ten benchmark test functions, and that, compared with algorithms such as K-means and DBSCAN (Density-Based Spatial Clustering of Applications with Noise), DCFSK achieves the best clustering results on six synthetic datasets and six UCI datasets.
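The weighted distance described above can be sketched as follows; the scheme shown (one fixed weight per feature) is an illustrative assumption, not the paper's exact sample-and-feature weighting:

```python
import math

def weighted_euclidean(x, y, w):
    # Each squared feature difference is scaled by that feature's weight,
    # so informative features contribute more to the distance.
    return math.sqrt(sum(wi * (xi - yi) ** 2 for xi, yi, wi in zip(x, y, w)))
```

With uniform weights the measure reduces to the ordinary Euclidean distance, e.g. `weighted_euclidean([0, 0], [3, 4], [1, 1])` gives 5.0.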

Table and Figures | Reference | Related Articles | Metrics
Text classification based on pre-training model and label fusion
Hang YU, Yanling ZHOU, Mengxin ZHAI, Han LIU
Journal of Computer Applications    2024, 44 (3): 709-714.   DOI: 10.11772/j.issn.1001-9081.2023030340

Accurate classification of massive user comment text has important economic and social benefits. Most current text classification methods apply a text encoder directly before various classifiers, ignoring the prompt information contained in the label text. To address this, a Text and Label Information Fusion Classification model based on the RoBERTa (Robustly optimized BERT pretraining approach) pre-training model, namely TLIFC-RoBERTa, was proposed. Firstly, a RoBERTa pre-training model was used to obtain word vectors. Then, a Siamese network structure was used to train the text and label vectors separately, and the label information was mapped onto the text through interactive attention, so as to integrate the label information into the model. Finally, an adaptive fusion layer was set to tightly fuse the text representation with the label representation for classification. Experimental results on the Today Headlines and THUCNews datasets show that, compared with mainstream deep learning models such as RA-Labelatt (replacing static word vectors in the Label-based attention improved model with word vectors trained by RoBERTa-wwm) and LEMC-RoBERTa (RoBERTa combined with Label-Embedding-based Multi-scale Convolution for text classification), TLIFC-RoBERTa achieves the highest accuracy and the best classification performance on user comment datasets.
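The interactive-attention step can be sketched as below: the text vector attends over the label embeddings and the attended context is added back. This is a simplification of the fusion actually learned by TLIFC-RoBERTa, and the function and vector names are illustrative:

```python
import math

def attend_labels(text_vec, label_vecs):
    # Dot-product attention scores of the text representation against each label embedding.
    scores = [sum(t * l for t, l in zip(text_vec, lv)) for lv in label_vecs]
    # Softmax over labels (shifted by the max score for numerical stability).
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # Weighted sum of label embeddings, then additive fusion with the text vector.
    context = [sum(w * lv[i] for w, lv in zip(weights, label_vecs))
               for i in range(len(text_vec))]
    return [t + c for t, c in zip(text_vec, context)]
```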

Hatch recognition algorithm of bulk cargo ship based on incomplete point cloud normal filtering and compensation
Yumin SONG, Hao SUN, Zhan LI, Chang’an LI, Xiaoshu QIAO
Journal of Computer Applications    2024, 44 (1): 324-330.   DOI: 10.11772/j.issn.1001-9081.2023010051

Automatic ship loading systems, an important part of smart port construction, can greatly reduce port operating costs and improve economic benefits. Hatch recognition is the first step in the automatic ship loading task, and its success rate and recognition accuracy are crucial to the smooth progress of subsequent tasks. Collected ship point cloud data are often incomplete due to issues such as the number and angles of the port lidars, and the large amount of material often accumulated near the hatch prevents the collected point cloud from accurately expressing the geometric information of the hatch. These problems, frequent in actual port loading operations, significantly reduce the recognition success rate of existing algorithms and negatively affect automatic ship loading. It is therefore urgent to improve the success rate of hatch recognition under material interference or incomplete hatch data in the ship point cloud. By analyzing the structural features of ships and the point cloud data collected during automatic loading, a hatch recognition algorithm for bulk cargo ships based on incomplete point cloud normal filtering and compensation was proposed. Experiments verify that its recognition success rate and accuracy are improved compared with Miao's and Li's hatch recognition algorithms. The experimental results show that the proposed algorithm can not only filter out material noise in the hatch but also compensate for missing data, effectively improving the hatch recognition effect.

Spatial-temporal co-occurrence pattern mining algorithm for video data
Xiaoyu ZHANG, Ziqiang YU, Chengdong LIU, Bohan LI, Changfeng JING
Journal of Computer Applications    2023, 43 (8): 2330-2337.   DOI: 10.11772/j.issn.1001-9081.2022101566

Spatial-temporal co-occurrence patterns refer to combinations of video objects with spatial-temporal correlations. In order to quickly mine the spatial-temporal co-occurrence patterns meeting given query conditions from a huge volume of video data, a spatial-temporal co-occurrence pattern mining algorithm with a triple-pruning matching strategy, called Multi-Pruning Algorithm (MPA), was proposed. Firstly, video objects were extracted in a structured way by existing video object detection and tracking models. Secondly, the repeatedly occurring video objects extracted from a sequence of frames were stored in compressed form, and an index of the objects was created. Finally, a spatial-temporal co-occurrence pattern mining algorithm based on a prefix tree was proposed to discover the patterns meeting the query conditions. Experimental results on real and synthetic datasets show that the proposed algorithm improves efficiency by about 30% compared with the Brute Force Algorithm (BFA), and the larger the data volume, the more obvious the improvement. Therefore, the proposed algorithm can quickly discover the spatial-temporal co-occurrence patterns satisfying given query conditions in a large volume of video data.
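A toy version of prefix-tree counting for co-occurring object combinations might look like the following; MPA's triple-pruning and indexing strategies are omitted, and frames are reduced to plain sets of object ids:

```python
from itertools import combinations

def mine_cooccurrence(frames, size, min_support):
    """Return object combinations of the given size that co-occur in at least
    min_support frames, counted in a prefix tree (object id -> [count, children])."""
    root = {}
    for objs in frames:
        for combo in combinations(sorted(objs), size):
            children = root
            for obj in combo:
                node = children.setdefault(obj, [0, {}])
                children = node[1]
            node[0] += 1  # increment the count at the end of this prefix path

    result = []
    def walk(children, path):
        for obj, (count, kids) in children.items():
            if len(path) + 1 == size and count >= min_support:
                result.append(tuple(path + [obj]))
            walk(kids, path + [obj])
    walk(root, [])
    return sorted(result)
```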

Blow-CAST-Fish key recovery attack based on differential tables
Xiaoling SUN, Shanshan LI, Guang YANG, Qiuge YANG
Journal of Computer Applications    2022, 42 (9): 2742-2749.   DOI: 10.11772/j.issn.1001-9081.2021071340

Aiming at the limited number of attack rounds and high attack complexity of the Blow-CAST-Fish (Blow-C.Adams S.Tavares-Fish) algorithm, a key recovery attack on Blow-CAST-Fish based on differential tables was proposed. Firstly, the collisions of the S-boxes were analyzed, and 6-round and 12-round differential characteristics were constructed based on the collision of two S-boxes and of a single S-box respectively. Secondly, the differential tables of f3 were calculated, and three rounds were appended to the specific differential characteristic, thereby determining the relationship between the ciphertext difference and the input and output differences of f3. Finally, plaintexts meeting the conditions were selected for encryption, the input and output differences of f3 were calculated from the ciphertext difference, and the corresponding input/output pairs were found by querying the differential table, from which the subkeys were recovered. In the two-S-box collision case, the proposed attack completed a differential attack on 9-round Blow-CAST-Fish, one round more than the comparison attack, with the time complexity reduced from 2^107.9 to 2^74. In the single-S-box collision case, the proposed attack completed a differential attack on 15-round Blow-CAST-Fish; although this is one round fewer than the comparison attack, the proportion of weak keys was increased from 2^-52.4 to 2^-42 and the data complexity was reduced from 2^54 to 2^47. The test results show that, for the same differential characteristics, the attack based on differential tables improves attack efficiency.
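The differential table queried by the attack is the standard difference distribution table of an S-box. A sketch for a small 4-bit S-box follows (the S-box values here are arbitrary; Blow-CAST-Fish's S-boxes map 8 bits to 32 bits, so the real tables are far larger):

```python
def difference_table(sbox):
    """ddt[din][dout] = number of inputs x with sbox[x] ^ sbox[x ^ din] == dout.
    The attack looks up candidate input/output pairs in this table."""
    n = len(sbox)
    ddt = [[0] * n for _ in range(n)]
    for x in range(n):
        for din in range(n):
            ddt[din][sbox[x] ^ sbox[x ^ din]] += 1
    return ddt

toy_sbox = [6, 4, 12, 5, 0, 7, 2, 14, 1, 15, 3, 13, 8, 10, 9, 11]  # toy 4-bit S-box
ddt = difference_table(toy_sbox)
```

A zero input difference always maps to a zero output difference, so `ddt[0][0]` equals the input space size, and every row of the table sums to that size.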

Stock market volatility prediction method based on graph neural network with multi-attention mechanism
Xiaohan LI, Jun WANG, Huading JIA, Liu XIAO
Journal of Computer Applications    2022, 42 (7): 2265-2273.   DOI: 10.11772/j.issn.1001-9081.2021081487

The stock market is an essential element of the financial market, so the study of its volatility plays a significant role in controlling financial market risks and improving returns on investment, and it has therefore attracted widespread attention from both academia and industry. However, the stock market is affected by many factors, and facing its multi-source, heterogeneous information, it is challenging to mine and fuse such data efficiently. To fully capture the influence of different information sources and their interaction on stock market price changes, a graph neural network based on a multi-attention mechanism was proposed to predict price fluctuation in the stock market. First of all, a relationship dimension was introduced to construct heterogeneous subgraphs for the transaction data and news text of the stock market, and a multi-attention mechanism was adopted to fuse the graph data. Then, a graph neural network with Gated Recurrent Unit (GRU) was applied to perform graph classification, on the basis of which the volatility of three important indexes, the Shanghai Composite Index, the CSI 300 Index and the Shenzhen Component Index, was predicted. Experimental results show that, from the perspective of heterogeneous information characteristics, the news information of the stock market has a lagged influence on stock volatility compared with transaction data; from the perspective of heterogeneous information fusion, compared with algorithms such as Support Vector Machine (SVM), Random Forest (RF) and Multiple Kernel k-Means (MKKM) clustering, the proposed method improves prediction accuracy by 17.88, 30.00 and 38.00 percentage points respectively; in addition, a quantitative investment simulation was performed according to the model's trading strategy.

Real-time semantic segmentation method based on squeezing and refining network
Juan WANG, Xuliang YUAN, Minghu WU, Liquan GUO, Zishan LIU
Journal of Computer Applications    2022, 42 (7): 1993-2000.   DOI: 10.11772/j.issn.1001-9081.2021050812

Aiming at the problem that current semantic segmentation algorithms find it difficult to balance real-time inference against high-precision segmentation, a Squeezing and Refining Network (SRNet) was proposed to improve the real-time performance of inference and the accuracy of segmentation. Firstly, One-Dimensional (1D) dilated convolution and a bottleneck-like structure were introduced into the Squeezing and Refining (SR) unit, which greatly reduced the computation and the number of parameters of the model. Secondly, a multi-scale Spatial Attention (SA) fusing module was introduced to make efficient use of the spatial information in shallow-layer features. Finally, the encoder was formed by stacking SR units, and two SA units were used to form the decoder. Simulation shows that SRNet obtains 68.3% Mean Intersection over Union (MIoU) on the Cityscapes dataset with only 30 MB of parameters and 8.8×10^9 FLoating-point OPerations (FLOPs). Besides, the model reaches a forward inference speed of 12.6 Frames Per Second (FPS) with an input size of 512×1024×3 on a single NVIDIA Titan RTX card. Experimental results imply that the designed lightweight model SRNet reaches a good balance between accurate segmentation and real-time inference, and is suitable for scenarios with limited computing power and power consumption.

Stock market volatility prediction method based on improved genetic algorithm and graph neural network
Xiaohan LI, Huading JIA, Xue CHENG, Taiyong LI
Journal of Computer Applications    2022, 42 (5): 1624-1633.   DOI: 10.11772/j.issn.1001-9081.2021030519

Aiming at the difficulty of selecting stock valuation features and the lack of time-series relational features in the prediction of stock market volatility by intelligent algorithms such as Support Vector Machine (SVM) and Long Short-Term Memory (LSTM) network, and in order to accurately predict stock volatility and effectively prevent financial market risks, a new stock market volatility prediction method based on Improved Genetic Algorithm (IGA) and Graph Neural Network (GNN), named IGA-GNN, was proposed. Firstly, graph data of stock market trading indexes were constructed based on the time-series relation between adjacent trading days. Secondly, the Genetic Algorithm (GA) was improved by using the characteristics of the evaluation indexes to optimize the crossover and mutation probabilities, thereby realizing node feature selection. Then, the weight matrices of the edge and node features of the graph data were established. Finally, GNN was used to aggregate and classify the graph data nodes, realizing stock market volatility prediction. In the experiments, 130 stock evaluation indexes were studied in total, from which 87 effective ones were extracted by IGA under the GNN method, reducing the number of stock evaluation indexes by 33.08%. When the proposed IGA was applied to the intelligent algorithms for feature extraction, the resulting algorithms improved the overall prediction accuracy by 7.38 percentage points compared with the intelligent algorithms without feature extraction, and shortened the total training time by 17.97% compared with feature extraction using the traditional GA. Among them, the IGA-GNN method has the highest prediction accuracy, 19.62 percentage points higher than that of the GNN method without feature extraction, and its training time is 15.97% shorter on average than that of the GNN method using the traditional GA for feature extraction. Experimental results show that the proposed method can effectively extract stock features and has a good prediction effect.

Service integration method based on adaptive multi‑objective reinforcement learning
Xiao GUO, Chunshan LI, Yuyue ZHANG, Dianhui CHU
Journal of Computer Applications    2022, 42 (11): 3500-3505.   DOI: 10.11772/j.issn.1001-9081.2021122041

Current service resources in the Internet of Services (IoS) show a trend toward refinement and specialization, and services with a single function cannot meet the complex and changeable requirements of users, so service integration and scheduling methods have become hot spots in the field of service computing. However, most existing service integration and scheduling methods only consider the satisfaction of user requirements and ignore the sustainability of the IoS ecosystem. In response to these problems, a service integration method based on adaptive multi-objective reinforcement learning was proposed. In this method, a multi-objective optimization strategy was introduced into the framework of the Asynchronous Advantage Actor-Critic (A3C) algorithm, so as to ensure the healthy development of the IoS ecosystem while satisfying user needs. The integrated weight of the multi-objective value could be adjusted dynamically according to the regret value, which alleviated the imbalance of sub-objective values in multi-objective reinforcement learning. Service integration was verified in a real large-scale service environment. Experimental results show that the proposed method is faster than traditional machine learning methods in large-scale service environments, and yields a more balanced solution quality across objectives compared with Reinforcement Learning (RL) with fixed weights.
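One way to turn per-objective regret into dynamic scalarization weights is sketched below; the shortfall-from-target formula is an illustrative assumption, not the exact update used inside the paper's A3C framework:

```python
def regret_weights(values, targets):
    """Give each objective a weight proportional to its regret (shortfall from
    its target), so lagging objectives receive more attention in the next update."""
    regrets = [max(t - v, 0.0) for v, t in zip(values, targets)]
    total = sum(regrets)
    if total == 0.0:
        # All targets met: fall back to uniform weights.
        return [1.0 / len(values)] * len(values)
    return [r / total for r in regrets]
```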

Event‑driven dynamic collection method for microservice invocation link data
Peng LI, Zhuofeng ZHAO, Han LI
Journal of Computer Applications    2022, 42 (11): 3493-3499.   DOI: 10.11772/j.issn.1001-9081.2021101735

Microservice invocation link data is an important type of data generated in the daily operation of a microservice application system; it records, in the form of a link, the series of service invocations corresponding to a user request. Due to the distributed nature of such systems, invocation link data are generated at different microservice deployment nodes, and current collection methods for these distributed data are either full collection or sampling collection. Full collection may bring large data transmission and storage costs, while sampling collection may miss critical invocation data. Therefore, an event-driven, pipeline-sampling based dynamic collection method for microservice invocation link data was proposed, and a microservice invocation link system supporting dynamic collection of invocation link data was designed and implemented based on the open-source software Zipkin. Firstly, pipeline sampling was performed on the link data of different nodes that met predefined event features; that is, the same link's data from all nodes were collected by the data collection server only when data matching a defined event was generated by some node. Meanwhile, to address the problem of inconsistent data generation rates across nodes, multi-threaded streaming data processing based on time windows and data synchronization technology was used to realize data collection and transmission across nodes. Finally, considering that the link data of each node arrive at the server in different orders, the full link data were synchronized and summarized through a timing alignment method. Experimental results on a public microservice invocation link dataset show that, compared with the full collection and sampling collection methods, the proposed method collects link data containing specific events, such as anomalies and slow responses, more accurately and efficiently.
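The event-driven idea, shipping a link's spans only when some span matches a predefined event, can be sketched like this (plain tuples and dicts stand in for Zipkin span records; this is not Zipkin's API):

```python
def collect(spans, is_event):
    """Group (trace_id, span) records by trace, then keep only the traces in
    which at least one span satisfies the event predicate; all other traces
    are dropped instead of being fully collected."""
    grouped = {}
    for trace_id, span in spans:
        grouped.setdefault(trace_id, []).append(span)
    return {tid: s for tid, s in grouped.items() if any(is_event(x) for x in s)}
```

For example, with an "error occurred" predicate, only traces containing an error span are forwarded to the collection server, in full.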

Consensus transaction trajectory visualization tracking method for Fabric based on custom logs
Shanshan LI, Yanze WANG, Yinglong ZOU, Huanlei CHEN, He ZHANG, Ou WU
Journal of Computer Applications    2022, 42 (11): 3421-3428.   DOI: 10.11772/j.issn.1001-9081.2021111935

Concerning that consortium chains lack visualization methods to show the resource usage, health status, mutual relationships and consensus transaction process of each node, a Fabric consensus transaction Tracking method based on custom Logs (FTL) was proposed. Firstly, Hyperledger Fabric, a typical consortium chain framework, was used as the infrastructure of the FTL bottom layer. Then, the custom consensus transaction logs of Fabric were collected and parsed by the ELK (Elasticsearch, Logstash, Kibana) tool chain, with Spring Boot as the business logic processing framework. Finally, Graphin, a library focused on graph analysis, was utilized to visualize the consensus transaction trajectory. Experimental results show that, compared with the native Fabric application, the FTL-based Fabric application framework experienced only an 8.8% average performance decline after visual tracking was implemented, without significant latency, and it can provide a more intelligent blockchain supervision solution for regulators.

Survey of named data networking
Hongqiao MA, Wenzhong YANG, Peng KANG, Jiankang YANG, Yuanshan LIU, Yue ZHOU
Journal of Computer Applications    2022, 42 (10): 3111-3123.   DOI: 10.11772/j.issn.1001-9081.2021091576

The unique advantages of Named Data Networking (NDN) make it a candidate for the next-generation internet architecture. Through an analysis of the communication principle of NDN and a comparison with the traditional Transmission Control Protocol/Internet Protocol (TCP/IP) architecture, the advantages of the new architecture were described, and on this basis the key elements of its network architecture design were summarized and analyzed. In addition, to help researchers better understand this new network architecture, the successful applications of NDN after years of development were summed up. Following mainstream technology trends, particular attention was paid to the support of NDN for cutting-edge blockchain technology, and on the basis of this support, the research and development of applications combining NDN and blockchain technology were discussed and prospected.

Comparative density peaks clustering algorithm with automatic determination of clustering center
GUO Jia, HAN Litao, SUN Xianlong, ZHOU Lijuan
Journal of Computer Applications    2021, 41 (3): 738-744.   DOI: 10.11772/j.issn.1001-9081.2020071071
In order to solve the problem that clustering centers cannot be determined automatically by the Density Peaks Clustering (DPC) algorithm, and that clustering center points are not distinguished clearly enough from non-center points in the decision graph, a Comparative density Peaks Clustering algorithm with Automatic determination of clustering centers (ACPC) was designed. Firstly, the distance parameter was replaced by a distance comparison quantity, so that potential clustering centers became more obvious in the decision graph. Then, a 2D interval estimation method was used to select the clustering centers automatically, so as to automate the clustering process. Experimental results show that the ACPC algorithm has a better clustering effect on four synthetic datasets; the comparison of the Accuracy indicator on real datasets shows that on the Iris dataset, the clustering accuracy of ACPC reaches 94%, which is 27.3% higher than that of the traditional DPC algorithm, and the problem of selecting clustering centers interactively is solved by ACPC.
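The two quantities plotted in the DPC decision graph, local density rho and the distance delta to the nearest denser point, can be sketched as follows; this is plain DPC with a cutoff kernel, not ACPC's distance-comparison replacement:

```python
import math

def dpc_scores(points, dc):
    """Return (rho, delta) per point: rho counts neighbors within cutoff dc;
    delta is the distance to the nearest point of higher density (or the
    farthest point, for the densest point). Centers have large rho AND delta."""
    n = len(points)
    d = [[math.dist(points[i], points[j]) for j in range(n)] for i in range(n)]
    rho = [sum(1 for j in range(n) if j != i and d[i][j] < dc) for i in range(n)]
    delta = []
    for i in range(n):
        higher = [d[i][j] for j in range(n) if rho[j] > rho[i]]
        delta.append(min(higher) if higher else max(d[i]))
    return rho, delta
```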
3D model shape optimization method based on optimal minimum spanning tree
HAN Li, LIU Shuning, YU Bing, XU Shengsi, TANG Di
Journal of Computer Applications    2019, 39 (3): 858-863.   DOI: 10.11772/j.issn.1001-9081.2018081710

For efficient shape analysis of massive, heterogeneous and complex 3D models, an optimization method for 3D model shape based on an optimal minimum spanning tree was proposed. Firstly, a model description based on the 3D model Minimum Spanning Tree (3D-MST) was constructed. Secondly, local optimization was realized by topology and geometry detection combined with bilateral filtering and entropy weight distribution, obtaining an optimized MST representation of the model. Finally, shape analysis and similarity detection of the model were realized by optimized Laplacian spectral characteristics and Thin Plate Spline (TPS). The experimental results show that the proposed method not only effectively preserves the shape features of the model, but also realizes a sparse optimized representation of complex models, improving the efficiency and robustness of geometric processing and shape retrieval.

Local binary pattern based on dominant gradient encoding for pollen image recognition
XIE Yonghua, HAN Liping
Journal of Computer Applications    2018, 38 (6): 1765-1770.   DOI: 10.11772/j.issn.1001-9081.2017112791
Influenced by microscopic sensors and irregular collection methods, pollen images are often disturbed by different degrees of noise and rotated by different angles, which leads to generally low recognition accuracy. In order to solve this problem, a Dominant Gradient encoding based Local Binary Pattern (DGLBP) descriptor was proposed and applied to pollen image recognition. Firstly, the gradient magnitude of an image block in the dominant gradient direction was calculated. Secondly, the radial, angular and multiple gradient differences of the image block were calculated separately. Then, binary coding was performed according to the gradient differences of each image block, the binary codes were assigned weights adaptively according to the texture distribution of each local region, and the texture feature histograms of pollen images in three directions were extracted. Finally, the texture feature histograms under different scales were fused, and the Euclidean distance was used to measure the similarity between images. The average correct recognition rates of DGLBP on the Confocal and Pollenmonitor datasets are 94.33% and 92.02% respectively, which are 8.9 and 8.6 percentage points higher on average than those of other compared pollen recognition methods, and 18 and 18.5 percentage points higher on average than those of other improved LBP-based methods. The experimental results show that the proposed DGLBP descriptor is robust to noise and rotation changes in pollen images and has a better recognition effect.
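For contrast with the classic operator that DGLBP extends, a plain LBP code thresholds each neighbor against the center pixel and packs the comparison bits into one byte; DGLBP instead encodes gradient differences along the dominant gradient direction:

```python
def lbp_code(center, neighbors):
    """Classic 8-neighbor LBP: bit i is set when neighbor i >= center,
    giving one byte (0..255) per pixel that is histogrammed over a region."""
    return sum((1 << i) for i, n in enumerate(neighbors) if n >= center)
```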
Parallel algorithm for hillshading under multi-core computing environment
HAN Litao, LIU Hailong, KONG Qiaoli, YANG Fanlin
Journal of Computer Applications    2017, 37 (7): 1911-1915.   DOI: 10.11772/j.issn.1001-9081.2017.07.1911
Most existing hillshading algorithms are implemented on a single-core, single-thread programming model, which gives them low computational efficiency. To solve this problem, an improved algorithm that parallelizes existing hillshading algorithms based on a multi-core programming model was proposed. Firstly, the original Digital Elevation Model (DEM) data were divided into several data blocks by grid segmentation. Secondly, these data blocks were shaded in parallel using the Parallel class of the .NET environment to generate a shaded image of each block. Finally, the shaded images were spliced into a complete hillshading image. The experimental results show that the computational efficiency of the parallelized algorithm is obviously higher than that of the existing single-core, single-thread shading algorithms, and that the shading efficiency grows approximately linearly with the number of cores involved. Additionally, it is found that the three-dimensional, realistic effect of the hillshading image is extremely sensitive to the light source parameter settings.
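The divide / shade-in-parallel / splice pipeline can be sketched as follows; threads stand in for the .NET `Parallel` class, the per-cell shade uses one common (Horn-style) hillshade formulation from precomputed surface gradients, and the default light-source parameters are illustrative:

```python
import math
from concurrent.futures import ThreadPoolExecutor

def hillshade_cell(dzdx, dzdy, azimuth=315.0, altitude=45.0):
    # Hillshade value (0..255) for one cell from its surface gradients.
    zenith = math.radians(90.0 - altitude)
    az = math.radians(360.0 - azimuth + 90.0)
    slope = math.atan(math.hypot(dzdx, dzdy))
    aspect = math.atan2(dzdy, -dzdx)
    v = (math.cos(zenith) * math.cos(slope)
         + math.sin(zenith) * math.sin(slope) * math.cos(az - aspect))
    return max(0.0, 255.0 * v)

def shade_blocks(gradient_blocks, workers=4):
    # Shade independent DEM blocks in parallel, then return them in order,
    # mirroring the divide/shade/splice steps described above.
    def shade(block):
        return [hillshade_cell(gx, gy) for gx, gy in block]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(shade, gradient_blocks))
```

`pool.map` preserves input order, so splicing the shaded blocks back together is a simple concatenation.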
New improved algorithm for superword level parallelism
ZHANG Suping, HAN Lin, DING Lili, WANG Pengxiang
Journal of Computer Applications    2017, 37 (2): 450-456.   DOI: 10.11772/j.issn.1001-9081.2017.02.0450
The SLP (Superword Level Parallelism) algorithm cannot effectively handle large-scale applications that contain little parallel code, and some code that can be vectorized may even be adverse to vectorization. To address this, a new improved SLP algorithm, namely NSLPO, was proposed. First of all, the non-isomorphic statements that cannot be vectorized were transformed into isomorphic statements wherever possible, thus locating vectorization opportunities that SLP has lost. Secondly, the Max Common Subgraph (MCS) was built by adding redundant nodes, and the supplement diagram of SLP was obtained through optimizations such as redundancy deletion, which can greatly increase the parallelism of the program. Finally, code harmful to vectorization was excluded from vectorization by a cutting method and executed serially; only the code valuable for vectorization was vectorized, to improve the efficiency of programs as much as possible. Experiments were conducted on widely used kernel test sets. The experimental results show that, compared with the SLP algorithm, the proposed NSLPO algorithm has better performance, reducing running time by 9.1%.
Pencil drawing rendering based on textures and sketches
SUN Yuhong, ZHANG Yuanke, MENG Jing, HAN Lijuan
Journal of Computer Applications    2016, 36 (7): 1976-1980.   DOI: 10.11772/j.issn.1001-9081.2016.07.1976
Concerning the problems in pencil drawing generation that pencil lines lack flexibility and textures lack direction, a method combining directional textures and pencil sketches was proposed to produce pencil drawings from natural images. First, histogram matching was employed to obtain the tone map of an image, and the image was segmented into several regions according to color. For each region, tone and direction were computed from its color and shape to decide the final tone and direction of the pencil drawing. Then, an adjusted linear convolution was used to generate pencil sketches with a degree of randomness. Finally, the directional textures and sketches were combined to obtain the pencil drawing style. Different kinds of natural images can be converted into pencil drawings by the proposed method, and the renderings were compared with those of existing methods, including line integral convolution and the tone-based method. The experimental results demonstrate that the directional texture simulates manual pencil texture better and the adjusted sketches mimic the randomness and flexibility of manual pencil drawings.
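The tone-mapping step via histogram matching can be sketched as follows. This is a minimal pure-Python version on a flattened 8-bit grayscale list; the target pencil-tone histogram here is an assumed toy distribution, not the paper's model:

```python
# Histogram matching: map each source gray level to the target level
# whose cumulative distribution value is closest.

def cdf(hist):
    total, run, out = sum(hist), 0, []
    for h in hist:
        run += h
        out.append(run / total)
    return out

def match_histogram(pixels, target_hist, levels=256):
    src_hist = [0] * levels
    for p in pixels:
        src_hist[p] += 1
    src_cdf, tgt_cdf = cdf(src_hist), cdf(target_hist)
    # Build a lookup table from source level to target level.
    lut = [min(range(levels), key=lambda t: abs(tgt_cdf[t] - src_cdf[s]))
           for s in range(levels)]
    return [lut[p] for p in pixels]

# A dark image pushed toward a light, paper-like tone distribution.
dark = [10, 10, 20, 30, 30, 30]
target = [0] * 256
for level in (200, 220, 240):
    target[level] = 1
light = match_histogram(dark, target)
```

A renderer would apply the same mapping per region; here the whole image shares one tone map for brevity.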
Vector exploring path optimization algorithm of superword level parallelism with subsection constraints
XU Jinlong, ZHAO Rongcai, HAN Lin
Journal of Computer Applications    2015, 35 (4): 950-955.   DOI: 10.11772/j.issn.1001-9081.2015.04.0950

Superword Level Parallelism (SLP) is a vectorization approach that exploits parallelism within basic blocks. With loop unrolling, more parallelism can be exposed, but at the same time too many exploration paths are introduced. To solve this problem, an optimized SLP method with subsection constraints was proposed. Redundancy elimination on segmentation was used to obtain homogeneous segments; an inter-section exploration method based on SLP was used to restrain the exploration paths and reduce the complexity of the algorithm; finally, pack adjustment was used to handle overlapping memory accesses. The experimental results show that the vectorization capability of SLP is enhanced; for the tested serial programs, the average speedup of the vectorized version is close to 2.

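The effect of restricting exploration to homogeneous segments can be illustrated with a toy count of candidate statement pairs (the operator sequence below is an assumed example, not from the paper):

```python
# Compare the number of candidate pairs explored with and without a
# segment constraint, for a toy operator sequence after unrolling.

from itertools import combinations, groupby

ops = ["add", "add", "mul", "mul", "add", "add"]

# Unconstrained: every same-op pair anywhere in the block is a candidate.
unconstrained = sum(
    1 for a, b in combinations(range(len(ops)), 2) if ops[a] == ops[b]
)

# Segment-constrained: pairs are only formed inside each homogeneous run.
segments = [list(g) for _, g in groupby(range(len(ops)), key=lambda i: ops[i])]
constrained = sum(len(list(combinations(seg, 2))) for seg in segments)
# 7 unconstrained candidates shrink to 3 under the segment constraint.
```

The real algorithm's pruning is more sophisticated, but the combinatorial benefit is of this kind: exploration cost grows with pairs inside segments rather than pairs across the whole block.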
Relay selection and power allocation optimization algorithm based on long-delay channel in underwater wireless sensor networks
LIU Zixin JIN Zhigang SHU Yishan LI Yun
Journal of Computer Applications    2014, 34 (7): 1951-1955.   DOI: 10.11772/j.issn.1001-9081.2014.07.1951

In order to deal with channel fading in Underwater Wireless Sensor Networks (UWSN), which varies randomly across the time, space and frequency domains, an underwater cooperative communication model with relays was proposed to improve the reliability of the communication system and obtain diversity gain. Based on the new model, a relay selection algorithm for UWSN was proposed, which used new evaluation criteria to select the best relay node by considering two indicators: channel gain and long propagation delay. With the selected relay node, the source node and relay nodes could adjust their transmission power through a power allocation algorithm based on the principle of minimizing the bit error rate. In a typical scenario, compared with the traditional relay selection algorithm and equal power allocation, the new algorithm reduces the delay by 16.7% and lowers the bit error rate by 1.81 dB.

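The selection step can be sketched as scoring each candidate relay on the two indicators. The linear weighting and the numbers below are illustrative assumptions, not the paper's exact criterion:

```python
# Pick the relay with the best trade-off between channel gain (higher
# is better) and acoustic propagation delay (lower is better).

def select_relay(relays, delay_weight=0.5):
    """relays: list of (name, channel_gain, delay_seconds)."""
    def score(r):
        _, gain, delay = r
        return gain - delay_weight * delay
    return max(relays, key=score)[0]

candidates = [
    ("r1", 0.9, 1.2),   # strong channel, but a long acoustic delay
    ("r2", 0.7, 0.3),   # weaker channel, short delay
    ("r3", 0.5, 0.2),
]
best = select_relay(candidates)   # "r2" wins under this weighting
```

In the long-delay underwater setting the delay term matters far more than in terrestrial relay selection, which is why a gain-only criterion would pick `r1` here while the joint criterion prefers `r2`.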
Logistics service supply chain coordination based on forecast-commitment contract
HE Chan LIU Wei
Journal of Computer Applications    2013, 33 (11): 3271-3275.  
To coordinate a logistics service supply chain composed of a single-function sub-contractor and an integrator, a forecast-commitment contract was proposed. In this contract, the logistics service integrator provides a forecast of a future order together with a guarantee to purchase a portion of it. Based on this information, the logistics service sub-contractor decides on its logistics capacity investment. The contract yields an optimal strategy for the sub-contractor and the optimal forecast for the integrator. A buyback parameter was then introduced into the forecast-commitment contract. The results show that, with reasonable parameters, the proposed contract can motivate the sub-contractor to invest, and that it can coordinate the whole system by achieving a Pareto improvement for the logistics service supply chain and increasing the revenue of both the supply chain system and the integrator. The buyback parameter can improve the sub-contractor's capacity investment under the same forecast. Finally, a numerical experiment was carried out to illustrate the forecast-commitment contract, and its results verified the theoretical analysis.
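The sub-contractor's capacity decision can be illustrated with a small newsvendor-style computation. The demand distribution, prices, and the assumption that sales are floored by the committed quantity are all toy choices for the sketch, not the paper's model:

```python
# Capacity choice under a forecast-commitment contract: the integrator
# commits to purchase at least `commitment` units of the forecast, and
# the sub-contractor picks capacity to maximize expected profit.

def expected_profit(capacity, demand_dist, commitment, price, cost):
    """demand_dist: list of (demand, probability) pairs."""
    revenue = 0.0
    for demand, prob in demand_dist:
        # Sales are capped by capacity, but never fall below the commitment.
        sold = min(capacity, max(demand, commitment))
        revenue += prob * price * sold
    return revenue - cost * capacity

def best_capacity(demand_dist, commitment, price, cost, max_cap=200):
    return max(range(max_cap + 1),
               key=lambda c: expected_profit(c, demand_dist, commitment,
                                             price, cost))

dist = [(80, 0.5), (120, 0.5)]          # uncertain future order
cap = best_capacity(dist, commitment=60, price=10, cost=6)   # -> 80
```

With these numbers the marginal value of capacity drops from 4 to -1 per unit once capacity exceeds the low-demand scenario, so the optimum sits at 80; raising the commitment or adding a buyback term shifts that break-even point upward, which is the coordination lever the contract exploits.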
Extraction of fetal electrocardiogram signal utilizing blind source separation based on time-frequency distributions and wavelet packet denoising
HAN Liang PU Xiujuan
Journal of Computer Applications    2013, 33 (08): 2394-2396.  
A new method utilizing Blind Source Separation based on Time-Frequency distributions (TFBSS) and wavelet packet denoising was proposed to extract the Fetal ElectroCardioGram (FECG) signal. The original eight ElectroCardioGram (ECG) signals obtained from the thoracic and abdominal areas of a pregnant woman were first separated into eight components by rearrangement TFBSS. Then the Maternal ECG (MECG) and noise components among the eight were set to zero, and the remaining components were reconstructed using the mixing matrix. The noisy FECG was extracted by separating the reconstructed result with rearrangement TFBSS again. Finally, the baseline drift and noise in the FECG were suppressed by wavelet packet denoising. The FECG could be extracted even when the fetal QRS wave partly or entirely overlapped with the maternal QRS wave in the abdominal composite signal. The experimental results show that a clear FECG can be extracted by the proposed method.
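The "zero the unwanted components, then reconstruct through the mixing matrix" step can be shown on a toy two-channel example. The mixing matrix and source values below are invented for illustration; the real method estimates them from eight channels via TFBSS:

```python
# Observations follow x = A @ s. Once sources are separated, zeroing
# the maternal source and remixing leaves only the fetal contribution
# as it appears on each electrode.

def matvec(A, v):
    return [sum(a * b for a, b in zip(row, v)) for row in A]

A = [[1.0, 0.5],      # toy mixing matrix (assumed known here)
     [0.3, 1.0]]
s = [2.0, -1.0]       # source 0: "maternal", source 1: "fetal"

x = matvec(A, s)              # observed mixtures on the two electrodes

s_kept = [0.0, s[1]]          # zero out the maternal component
x_fetal = matvec(A, s_kept)   # reconstruction: fetal contribution only
```

Because mixing is linear, `x_fetal` equals exactly what the fetal source alone would have produced on each channel, which is why the reconstructed signal can then be separated and denoised further.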
Implementation of gray level error compensation for optical 4f system
HAN Liang JIANG Ziqi PU Xiujuan
Journal of Computer Applications    2013, 33 (07): 1973-1975.   DOI: 10.11772/j.issn.1001-9081.2013.07.1973
To compensate for the gray level error in an optical 4f system, a compensation method based on histogram matching and a Radial Basis Function (RBF) neural network was proposed. The nonlinear transformation between the histograms of the input and output images of the optical 4f system was fitted by the RBF neural network, yielding the optimal estimate of the histogram-matching curve between input and output images. The gray-level-error-compensated image was then obtained by histogram matching according to this optimal curve estimate. In an actual optical 4f system, the proposed method achieved an average Peak Signal-to-Noise Ratio (PSNR) gain of 2.96 dB and improved the visual quality of the processed images. The experimental results show that the gray level error in the optical 4f system can be compensated effectively and that the precision of optical information processing is improved by the proposed method.
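The curve-fitting step can be sketched with a Gaussian RBF interpolant over measured (input level, output level) pairs. The paper trains an RBF network; the sketch below instead interpolates exactly at the sample points, and the sample data are invented:

```python
# Fit the input->output gray-level transfer curve with Gaussian RBFs
# centered at the measured input levels, solving the small linear
# system by naive Gaussian elimination.

import math

def gauss_solve(M, b):
    """Gaussian elimination with partial pivoting on an n x n system."""
    n = len(M)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def rbf_fit(xs, ys, sigma=40.0):
    phi = lambda a, b: math.exp(-((a - b) / sigma) ** 2)
    G = [[phi(xi, xj) for xj in xs] for xi in xs]
    w = gauss_solve(G, ys)
    return lambda x: sum(wi * phi(x, xi) for wi, xi in zip(w, xs))

# Toy measured transfer pairs of the 4f system (assumed data).
xs, ys = [0.0, 128.0, 255.0], [10.0, 90.0, 240.0]
curve = rbf_fit(xs, ys)
```

The fitted `curve` plays the role of the estimated histogram-matching curve: compensation applies its inverse mapping to the captured output image.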
Cluster head extraction for data compression in wireless sensor networks
LIN Wei LI Bo HAN Li-hong
Journal of Computer Applications    2012, 32 (12): 3482-3485.   DOI: 10.3724/SP.J.1087.2012.03482
The Douglas-Peucker (DP) algorithm, a vector data compression algorithm, was introduced into wireless sensor networks. To reduce the number of scans over the data during compression, an improved cluster head extraction algorithm for data compression was proposed, in which the extracted points are called data cluster heads. The algorithm reduces the number of data scans in the compression process by setting a step size, applies the optimal curve fitting method to perform linear fitting of the monitored data points according to the attachment relationships of the data, extracts the cluster head data that reflect the overall characteristics, and partitions the non-cluster-head data into subgroups. The simulation results show that the cluster head extraction compression algorithm has a simpler process, achieves a better cluster head extraction effect for data with large fluctuations, reduces the amount of data transmitted in the network, and effectively saves energy consumption across the network.
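The classical Douglas-Peucker algorithm that this work builds on is compact enough to show in full. Points farther than `eps` from the chord are kept recursively; the survivors play the role of the "cluster head" points that summarize the series (the data below are an invented example):

```python
# Douglas-Peucker polyline simplification.

def perp_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dy * (px - ax) - dx * (py - ay)) / (dx * dx + dy * dy) ** 0.5

def douglas_peucker(points, eps):
    if len(points) < 3:
        return points[:]
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            idx, dmax = i, d
    if dmax <= eps:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:idx + 1], eps)
    right = douglas_peucker(points[idx:], eps)
    return left[:-1] + right    # drop the duplicated split point

pts = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
kept = douglas_peucker(pts, eps=0.5)   # far fewer points than pts
```

The improvement described in the abstract attacks the cost of the repeated scans this recursion implies, by stepping through the data and fitting curves instead of re-scanning full sub-ranges.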
Dynamic model of heavy-equipment airdrop and control design
HE Lei SUN Xiu-xia DONG Wen-han LI Da-dong
Journal of Computer Applications    2012, 32 (11): 3235-3239.   DOI: 10.3724/SP.J.1087.2012.03235
In view of the difference between existing heavy-equipment airdrop mathematical models and the real system, the separation body method was improved. With the cargo treated as a rigid body, the point of the constraint reaction on the cargo was modified and the direction of the parachute force was specified, so a more accurate method of calculating the disturbance torque was proposed that covers influential factors including the cabin floor angle and the friction coefficient. A more realistic dynamic model of heavy-equipment airdrop was then established. Control laws for the pitch attitude hold mode and the velocity hold mode were designed, and their parameters were selected by a genetic algorithm. The simulation results indicate that the proposed control scheme can effectively hold the flight path and aircraft attitude.
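The attitude-hold idea can be illustrated on a deliberately simplified pitch model. The double-integrator dynamics and hand-picked PD gains below are assumptions for the sketch; the paper's airdrop model is far richer and its gains come from a genetic algorithm:

```python
# Toy pitch-attitude hold: theta'' = u with a PD control law,
# integrated by semi-implicit Euler.

def simulate_pitch_hold(theta_ref, kp=4.0, kd=4.0, dt=0.01, steps=2000):
    theta, rate = 0.0, 0.0
    for _ in range(steps):
        u = kp * (theta_ref - theta) - kd * rate   # PD elevator command
        rate += u * dt                             # toy pitch dynamics
        theta += rate * dt
    return theta

final = simulate_pitch_hold(theta_ref=0.1)   # hold 0.1 rad of pitch
```

With `kp=4, kd=4` the closed loop is critically damped (characteristic roots at -2), so after 20 simulated seconds the pitch has settled on the reference; a genetic algorithm would search the gain space against the full nonlinear model instead of relying on such hand analysis.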
Curtain dynamic simulation based on particle constraint algorithm
HAN Li JIA Yue
Journal of Computer Applications    2012, 32 (10): 2806-2808.   DOI: 10.3724/SP.J.1087.2012.02806
To deal with unrealistic cloth undulation caused by super-elasticity, an improved particle constraint algorithm based on the mass-spring model was presented. In each iteration cycle, only the most stretched spring is adjusted directly, while the other over-stretched springs adapt through velocity changes. Curtain dynamic simulation was taken as an example to implement the algorithm, and the simulated curtain flaps naturally in the wind. The experimental results demonstrate that the proposed algorithm can effectively avoid unrealistic undulation and has strong stability.
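The "adjust only the most stretched spring per iteration" rule can be shown on a 1-D chain of particles. Free endpoints and the symmetric projection are assumptions of this toy; the paper's cloth also handles pinned points and the velocity feedback for the remaining springs:

```python
# In each iteration, project only the spring stretched furthest
# beyond its rest length back to that rest length, moving both of
# its endpoints symmetrically.

def relax_longest(points, rest, iterations=50):
    points = points[:]
    for _ in range(iterations):
        stretches = [abs(points[i + 1] - points[i]) - rest
                     for i in range(len(points) - 1)]
        i = max(range(len(stretches)), key=lambda k: stretches[k])
        if stretches[i] <= 1e-9:
            break                       # nothing over-stretched
        excess = stretches[i] / 2.0
        direction = 1.0 if points[i + 1] > points[i] else -1.0
        points[i] += direction * excess
        points[i + 1] -= direction * excess
    return points

chain = relax_longest([0.0, 1.0, 3.5], rest=1.0)
```

Each projection passes half of the excess to a neighbouring spring, so the worst violation shrinks geometrically across iterations; that graceful decay is what avoids the sudden undulation a hard global clamp would produce.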
Prediction on dispatching number of equipment maintenance people based on main factor method
SHAN Li-li ZHANG Hong-jun ZHANG Rui CHENG Kai WANG Zhi-teng
Journal of Computer Applications    2012, 32 (08): 2364-2368.   DOI: 10.3724/SP.J.1087.2012.02364
In order to forecast the number of equipment maintenance personnel more easily and validly, a general approach to selecting the features of the input vector for Support Vector Machine (SVM), named the Main Factor Method (MFM), was proposed. The terms "main factor", "driving factor", "voluntary action" and "action carrier" were defined, on which the theoretical MFM was constructed. First, the main factor of the voluntary actions in the prediction vector was established by the "infinitely related principle" and the "action purpose" method. Then the driving factors, which serve as the features of the SVM input vector, were refined from the selected main factor using the "selection principles of driving factors". The experimental results and comparisons with other methods of the same kind show that the proposed method produces more accurate predictions, with a relative average error of 0.0109.
Workflow modeling and simulation for implementation stage of construction project based on Petri net
LI Hai-ling SHI Ben-shan LIU Ke-jian
Journal of Computer Applications    2011, 31 (10): 2828-2831.   DOI: 10.3724/SP.J.1087.2011.02828
In order to carry out workflow management and control effectively, it is important to build a workflow model that can accurately express the systematicness, dynamics and uncertainty of the implementation stage of a construction project. By analyzing the workflow features and building a workflow conceptual model of the implementation stage of a construction project, a workflow model based on a hierarchical timed colored Petri net was presented. By means of this model, items such as information flow, resource flow, exception handling, duration and other abstract aspects of the implementation stage become available. This not only provides powerful methodological support for workflow management and control, but also extends Petri net modeling to the field of construction engineering. On the CPN Tools simulation platform, taking the implementation stage of a general industrial and civil building as an example, the authors built its workflow model for control and management, and finally verified the correctness and effectiveness of the model.
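The firing semantics underlying any Petri-net workflow model can be sketched in a few lines. The sketch below is untimed and uncolored (the paper uses a hierarchical timed colored net modeled in CPN Tools), and the workflow places are invented names:

```python
# A transition is enabled when every input place holds a token;
# firing consumes one token per input place and produces one per
# output place.

def enabled(marking, transition):
    inputs, _ = transition
    return all(marking.get(p, 0) >= 1 for p in inputs)

def fire(marking, transition):
    inputs, outputs = transition
    marking = dict(marking)
    for p in inputs:
        marking[p] -= 1
    for p in outputs:
        marking[p] = marking.get(p, 0) + 1
    return marking

# Toy construction-workflow step: construction needs a finished design
# and a free crew, and releases the crew when done.
t_construct = (["design_done", "crew_free"], ["built", "crew_free"])
m0 = {"design_done": 1, "crew_free": 1}
m1 = fire(m0, t_construct) if enabled(m0, t_construct) else m0
```

Timed and colored extensions attach delays and data values to tokens, which is what lets the construction model express durations, resource flow and exception handling on top of these basic rules.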
Path planning of unmanned aerial vehicle
CHEN Hai-han LIU Yin DU Yun-lei
Journal of Computer Applications    2011, 31 (09): 2574-2576.   DOI: 10.3724/SP.J.1087.2011.02574
Path planning aims to use terrain, enemy and other information to plan the penetration trajectory with the largest survival probability for an Unmanned Aerial Vehicle (UAV). After analyzing the simulation requirements of path planning, the path planning of UAV was studied. First, a Voronoi diagram was constructed for a battlefield environment full of threats; the Voronoi diagram yields routes that travel among a set of threat source points while avoiding the threats. Then, Dijkstra's algorithm was used to search for the optimal route. Finally, a path planning simulation system was implemented on the Visual Studio .Net 2010 platform with an MS SQL Server 2008 database and the Visual C# 2008 language, and the simulation results were presented in graph form, providing a good basis for further study.
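The route-search step is standard Dijkstra over the graph the Voronoi diagram produces. The sketch below omits the Voronoi construction and uses an invented waypoint graph whose edge weights stand in for threat-aware costs:

```python
# Dijkstra's shortest path over an adjacency-list graph.

import heapq

def dijkstra(graph, src, dst):
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:                    # walk predecessors back to src
        path.append(node)
        node = prev[node]
    path.append(src)
    return path[::-1], dist[dst]

graph = {
    "start": [("a", 2.0), ("b", 5.0)],
    "a": [("b", 1.0), ("goal", 6.0)],
    "b": [("goal", 2.0)],
}
route, cost = dijkstra(graph, "start", "goal")
```

With threat-weighted edge costs, the minimum-cost route corresponds to the maximum-survival-probability trajectory the planner seeks.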
Image denoising model in combination with partial differential equation and median filtering
WAN Shan LI Lei-min HUANG Yu-qing
Journal of Computer Applications    2011, 31 (09): 2512-2514.   DOI: 10.3724/SP.J.1087.2011.02512
A denoising model based on a Partial Differential Equation (PDE) cannot eliminate impulse noise, and a low-order PDE produces a blocky effect. To solve these problems, a denoising model combining PDE and adaptive median filtering was proposed. By analyzing the image gradient, the model uses the second-order model to denoise regions with obvious gradient change, and the fourth-order model in smooth regions with tiny gradient change. Impulse-noise regions are localized by exploiting the fact that the gradient of impulse noise is far larger than that of edges, and adaptive median filtering is applied in those regions to eliminate the impulse noise. This method can eliminate impulse noise while protecting image edges effectively; it also overcomes the blocky effect and improves denoising efficiency. The experiments prove the validity of the model.
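The impulse-versus-edge discrimination can be shown on a 1-D toy signal. The threshold rule and `factor` value below are assumptions of this sketch; the paper works in 2-D and combines the detector with the PDE diffusion terms:

```python
# A sample whose local gradient greatly exceeds the ordinary edge
# scale across it is treated as impulse noise and replaced by the
# window median; edges and smooth samples pass through unchanged.

def median(vals):
    s = sorted(vals)
    return s[len(s) // 2]

def impulse_median(signal, window=1, factor=5.0):
    out = signal[:]
    for i in range(1, len(signal) - 1):
        grad = abs(signal[i] - signal[i - 1]) + abs(signal[i] - signal[i + 1])
        edge = abs(signal[i + 1] - signal[i - 1])     # ordinary edge scale
        if grad > factor * max(edge, 1e-9):           # impulse, not an edge
            lo, hi = max(0, i - window), min(len(signal), i + window + 1)
            out[i] = median(signal[lo:hi])
    return out

noisy = [1.0, 1.0, 9.0, 1.0, 1.0, 5.0, 5.0]   # impulse at index 2, edge at 5
clean = impulse_median(noisy)
```

Note that the genuine step from 1 to 5 survives while the isolated spike is removed: at an edge the left-to-right difference is as large as the local gradient, so the ratio test fires only on impulses. This edge-preservation is exactly what lets the median stage coexist with the PDE stage.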